Our commitment to protecting our customers is unwavering. We use AI to proactively detect threats and scams and to alert our customers to potential risks and disinformation. In doing so, we leverage AI responsibly, in line with the White House voluntary AI commitments and other global AI safety initiatives.
Partnering for Responsible AI Practices
In 2024, we joined leading tech companies such as Microsoft, Adobe, Google, IBM, Meta, and TikTok in signing the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, to play our part in protecting elections and the electoral process.
Collectively, we’ll bring our respective strengths to bear against deepfakes and other harmful uses of AI. Deepfakes deceptively manipulate digital media to alter the audio, video, or images of political candidates, election officials, and other public figures during democratic elections. They can even carry false information about when, where, and how people can cast their votes.
We see this Tech Accord as a crucial first step in keeping people safe from harmful AI-generated content during the election year. As AI continues to shape and reshape what we see and hear online, it is important to continue working together to protect people from AI-powered scams and disinformation.
A set of seven principles guides the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, with each signatory of the pledge lending its strengths to the cause:
- Prevention: Researching, investing in, and/or deploying reasonable precautions to limit risks of deliberately deceptive AI election content being generated.
- Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible. (A simplified sketch of such a signal follows this list.)
- Detection: Attempting to detect deceptive AI election content, including through the broad use of provenance signals across the industry.
- Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of deceptive AI election content.
- Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with deceptive AI election content.
- Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding deceptive AI content and ways citizens can protect themselves from being manipulated or deceived by this content.
- Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of deceptive AI election content.
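To make the provenance principle concrete, here is a minimal sketch of how a provenance signal might be attached to and verified against a piece of content. It is illustrative only: the manifest fields, helper functions, and shared HMAC key below are hypothetical, and production implementations rely on industry standards such as C2PA Content Credentials with certificate-based signing rather than a shared secret.

```python
# Illustrative sketch only: a minimal provenance manifest bound to a media
# file and signed with HMAC. Field names and helpers are hypothetical.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"publisher-secret-key"  # hypothetical; real systems use PKI

def build_manifest(media_bytes: bytes, generator: str) -> dict:
    """Record where a piece of content came from and how it was made."""
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,       # e.g. a model or tool identifier
        "ai_generated": True,         # disclosure flag
        "created_at": int(time.time()),
    }

def sign_manifest(manifest: dict) -> dict:
    """Attach an integrity signature so later tampering is detectable."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the manifest matches this exact file."""
    fields = dict(manifest)
    claimed = fields.pop("signature", "")
    payload = json.dumps(fields, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(claimed, expected)
    matches_file = fields["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and matches_file

media = b"...image bytes..."
manifest = sign_manifest(build_manifest(media, "example-image-model"))
print(verify_manifest(media, manifest))  # True until the file or manifest changes
```

Whatever standard is used, the core idea is the same: bind a signed manifest to a hash of the content so that both tampering and file swaps are detectable.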
Protecting people from the ill use of AI calls for commitment from all corners. We face an imposing but surmountable global challenge. Collectively, we can guard against the misuse of AI and keep people safer. This excerpt from the accord we co-signed puts it well: “The protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders.”
We’re proud to say that we’ll contribute to that goal with everything we can bring to bear.
Our Methods and Technologies
Our advanced AI-powered protection employs several methods and techniques that adhere to strict security protocols, including the following:
- We limit the scope of the AI to operate only on specific threat-related use cases in which it has been thoroughly trained. This increases the accuracy and security of our solutions.
- Our protection architecture balances AI and threat intelligence. We rely on concrete threat intelligence when it exists and use AI only when it helps us achieve the best results (see the first sketch after this list). This hybrid approach ensures that we deliver the highest level of protection while reducing risk.
- Our patented and patent-pending technology is designed to reduce inaccurate and inconsistent results. While no AI is 100% free of errors, we aim to mitigate risks and errors as much as possible. We double-check our AI model outcomes, actively monitor telemetry, and consistently enhance the accuracy and quality of our AI solutions.
- When training our AI models, we’re meticulous and thoughtful about the quality and relevance of our data. We use human-based cybersecurity data focused on threat detection use cases. We also use synthetic data created by algorithms to improve our AI models and make them resilient against adversarial attacks (see the second sketch after this list).
- To ensure your privacy while protecting you from time-sensitive threats, we run AI locally on your device whenever possible. For cloud-based AI, we use only the minimum amount of data needed to protect your devices (see the third sketch after this list).
- When we develop and use AI models, we ensure that we adhere to all relevant data privacy and AI laws and regulations. We care deeply about privacy, security, and online safety, all of which are central to our essential mission: to protect users of our products and services from the risks of theft, disruption, and unauthorized access to their online information and activities.
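The hybrid balance between threat intelligence and AI described in the list above can be pictured as a simple decision flow: consult concrete threat intelligence first, and fall back to an AI model only for unknown samples. This is a minimal sketch under assumed names; the blocklist, the score_with_model function, and the 0.9 threshold are hypothetical stand-ins, not our actual pipeline.

```python
# Minimal sketch of a threat-intelligence-first pipeline with an AI fallback.
import hashlib
from dataclasses import dataclass

# Concrete threat intelligence: hashes of known-malicious files (hypothetical).
KNOWN_MALICIOUS_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

@dataclass
class Verdict:
    malicious: bool
    source: str        # "threat-intel" or "ai-model"
    confidence: float

def score_with_model(sample: bytes) -> float:
    """Hypothetical ML scorer returning the probability a sample is malicious."""
    return 0.12  # placeholder

def classify(sample: bytes) -> Verdict:
    # 1. Prefer concrete threat intelligence when it exists: exact,
    #    explainable, and free of model error for known threats.
    if hashlib.md5(sample).hexdigest() in KNOWN_MALICIOUS_HASHES:
        return Verdict(True, "threat-intel", 1.0)
    # 2. Use the AI model only for unknown samples, where generalizing
    #    beyond known signatures is what adds value.
    score = score_with_model(sample)
    return Verdict(score >= 0.9, "ai-model", score)

print(classify(b"unknown sample"))
```

The design choice here is that the cheap, deterministic check always runs first and the model never overrides a known verdict, which keeps behavior predictable and reduces risk.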
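The synthetic-data point can be sketched the same way: start from vetted, human-labeled samples, then add algorithmically generated variants so a model learns a family of threats rather than one exact byte pattern. The mutate helper and the toy dataset below are hypothetical illustrations.

```python
# Sketch of augmenting training data with synthetic, perturbed variants.
import random

def mutate(sample: bytes, rate: float = 0.05) -> bytes:
    """Create a synthetic variant by randomly perturbing a fraction of bytes."""
    out = bytearray(sample)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

# Labeled, human-vetted samples (1 = malicious, 0 = benign)...
real_data = [(b"malicious payload bytes", 1), (b"benign document bytes", 0)]

# ...plus synthetic variants with the same labels, to resist small
# adversarial changes that would evade an exact-match detector.
augmented = real_data + [(mutate(x), y) for x, y in real_data for _ in range(3)]
print(len(augmented))  # 2 real + 6 synthetic = 8 training samples
```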
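Finally, the local-first privacy principle can be sketched as a tiered decision: the on-device model handles confident verdicts on its own, and only a non-reversible fingerprint, never the content itself, goes to the cloud when the local result is uncertain. All function names and thresholds here are hypothetical.

```python
# Sketch of a privacy-preserving, local-first verdict with minimal cloud data.
import hashlib

def run_local_model(content: bytes) -> float:
    """Hypothetical on-device model returning the probability of a threat."""
    return 0.55  # placeholder score in the uncertain band

def cloud_lookup(fingerprint: str) -> bool:
    """Hypothetical cloud reputation check keyed only by a content hash."""
    return False  # placeholder

def assess(content: bytes) -> bool:
    score = run_local_model(content)  # raw content never leaves the device
    if score >= 0.9:
        return True                   # confident local detection
    if score <= 0.1:
        return False                  # confident local clean verdict
    # Uncertain band: send only a minimal, non-reversible fingerprint.
    return cloud_lookup(hashlib.sha256(content).hexdigest())

print(assess(b"suspicious attachment bytes"))
```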
McAfee is dedicated to pioneering the future of AI security technology, ensuring its responsible use to protect people from the threats of today and tomorrow.
More AI News
McAfee Smart AI™ — your defense against AI scams and disinformation
The rise in AI-generated content has made it difficult to determine what’s real from what’s fake. In a digital world where seeing is no longer believing, McAfee Smart AI™ is your defense against AI-generated scams and election disinformation... Read more
Deepfake Detector technology defends against scams and misinformation
Sophisticated AI tools have made it easier for cybercriminals to create highly convincing scams by cloning voices and manipulating authentic videos. They can make it appear as if someone you trust or a public figure said something different from what they actually said. McAfee Deepfake Detector is your defense against malicious and misleading deepfakes... Read more